In the fields of chronology and periodization, an epoch is an instant in time chosen as the origin of a particular era. The epoch then serves as a reference point from which time is measured: time units are counted from the epoch so that the date and time of events can be specified unambiguously.
Events taking place before the epoch can be dated by counting negatively from it. In pragmatic periodization practice, however, epochs are defined for the past, and each new epoch begins the next era and thereby marks the end of the preceding one. The purpose of such definitions is to clarify and coordinate scholarship about a period, sometimes across disciplines.
Epochs are generally chosen to be convenient or significant by a consensus of the time scale's initial users, or by authoritarian fiat. The epoch moment or date is usually defined by a specific, clearly identifiable event or condition (the epoch event or epoch criterion) from which the period, era, or age is characterized or described.
Each calendar era starts from an arbitrary epoch, which is often chosen to commemorate an important historical or mythological event. For example, the epoch of the anno Domini calendar era (the civil calendar era used internationally and in many countries) is the traditionally reckoned Incarnation of Jesus.[1] Many other current and historical calendar eras exist, each with its own epoch.
In astronomy, an epoch is a specific moment in time for which celestial coordinates or orbital elements are specified, and from which other orbital parameters are then calculated in order to predict future positions. The tools of celestial mechanics and of its subfield orbital mechanics, both of which predict orbital paths and positions about a center of gravity, are used to generate an ephemeris (plural: ephemerides; from the Greek ephemeros, "daily"): a table of values giving the positions of astronomical objects in the sky at a given time or times, or a formula for calculating such positions from the time offset from the epoch. Such calculations generally yield an elliptical path in a plane defined by the orbit, with the attracting center of gravity at one of the two foci of the ellipse. Observing from another orbiting body, which follows its own path, introduces three-dimensional shifts into the spherical trigonometry used to calculate relative positions; because each body's own motion is itself elliptical, the ephemeris still need only specify one set of equations per body to serve as a useful tool for predicting the future location of the object of interest.
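As a minimal illustration (not from the source text) of how elements referred to an epoch are used: in an unperturbed two-body orbit, all elements except the mean anomaly $M$ stay fixed at their epoch values, and $M$ advances linearly at the constant mean motion $n$ from its epoch value $M_0$:

$$M(t) = M_0 + n\,(t - t_0)$$

where $t_0$ is the epoch. Perturbations gradually invalidate this simple propagation, which is what forces the periodic recalculation described next.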
Over time, inexactitudes and other errors accumulate, producing ever larger errors of prediction, so ephemeris parameters must be recalculated from time to time, and that requires a new epoch to be defined. Astronomers and groups of astronomers once defined epochs to suit themselves, but in these days of speedy communication, epochs are generally defined by international agreement, so that astronomers worldwide can collaborate more effectively. It was inefficient and error-prone for data observed by one group to need translation (mathematical transformation) before other groups could compare information.
The current standard epoch is called "J2000.0": approximately noon on January 1, 2000, in the Gregorian calendar, at the Royal Observatory, Greenwich, in London, England. This is equivalent to the Julian date 2451545.0 TT (Terrestrial Time), or January 1, 2000, 11:59:27.816 TAI, or January 1, 2000, 11:58:55.816 UTC.
When dates or times are expressed as years with a decimal fraction from J2000.0, each year is exactly 365.25 days long, the average length of a year in the Julian calendar.
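This yields the standard relation between a Julian date $JD$ and the corresponding Julian epoch $J$, stated here for illustration:

$$J = 2000.0 + \frac{JD - 2451545.0}{365.25}$$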
The time kept internally by a computer system is usually expressed as the number of time units that have elapsed since a specified epoch, which is nearly always specified as midnight Universal Time on some particular date.
Software timekeeping systems vary widely in the granularity of their time units; some systems may use time units as large as a day, while others may use nanoseconds. For example, for an epoch date of midnight UTC on January 1, 1900, and a time unit of one second, the midnight between January 1 and 2, 1900 is represented by the number 86400, the number of seconds in one day. When times prior to the epoch need to be represented, it is common to use the same system, but with negative numbers.
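A minimal sketch of this arithmetic, assuming Python's standard datetime module and the 1900 epoch from the example above (the function name to_timestamp is illustrative):

```python
from datetime import datetime

EPOCH = datetime(1900, 1, 1)  # epoch: midnight, January 1, 1900

def to_timestamp(moment: datetime) -> int:
    """Seconds elapsed since the epoch; negative for moments before it."""
    return int((moment - EPOCH).total_seconds())

print(to_timestamp(datetime(1900, 1, 2)))    # 86400: one day after the epoch
print(to_timestamp(datetime(1899, 12, 31)))  # -86400: one day before the epoch
```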
These representations of time are mainly for internal use. If an end user interaction with dates and times is required, the software will nearly always convert this internal number into a date and time representation that is comprehensible to humans.
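Conversely, a human-readable form is recovered by adding the stored count back onto the epoch; a sketch using the Unix epoch of January 1, 1970 (listed in the table below) and Python's standard library:

```python
from datetime import datetime, timezone

unix_time = 946_684_800  # seconds since midnight UTC, January 1, 1970
print(datetime.fromtimestamp(unix_time, tz=timezone.utc))
# 2000-01-01 00:00:00+00:00
```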
The following table lists epoch dates used by popular software and other computer-related systems. The time in these systems is stored as the quantity of a particular time unit (days, seconds, nanoseconds, etc.) that has elapsed since a stated time (usually midnight UTC at the beginning of the given date).
Epoch date | Notable uses | Rationale for selection |
---|---|---|
January 1, 0 | MATLAB,[8] Symbian, Turbo DB and tdbengine | |
January 1, 1 | Microsoft .NET, REXX, Dershowitz and Reingold source code (where it is known as Rata Die)[9] | |
January 1, 1601 | NTFS, COBOL, Win32/Win64 | 1601 was the first year of the 400-year Gregorian calendar cycle in progress when Windows NT was being designed.[10] |
January 1, 1753 | Microsoft SQL Server | The first full year after the adoption of the Gregorian calendar by Britain and its colonies. |
December 31, 1840 | MUMPS programming language | 1841 was a non-leap year several years before the birth year of the oldest living US citizen when the language was designed, so any living person's date of birth could be stored as a positive number.[11] |
November 17, 1858 | VMS, United States Naval Observatory, other astronomy-related computations | Midnight on November 17, 1858 corresponds to Julian Day 2,400,000.5; the epoch is thus the zero point of the Modified Julian Date (MJD = JD − 2,400,000.5).[12] |
December 30, 1899 | Microsoft COM DATE, Object Pascal | Chosen so that serial numbers match those of Microsoft Excel, which (for Lotus 1-2-3 compatibility) wrongly treats 1900 as a leap year; moving the epoch back one day compensates for the fictitious February 29, 1900.[13] |
January 0, 1900 | Microsoft Excel,[13] Lotus 1-2-3 | While logically January 0, 1900 is equivalent to December 31, 1899, these systems do not allow users to specify the latter date. |
January 1, 1900 | Network Time Protocol, IBM CICS, Mathematica, RISC OS, Common Lisp | |
January 1, 1904 | LabVIEW, Apple Inc.'s Mac OS through version 9, Palm OS, MP4, Microsoft Excel (optionally),[13] IGOR Pro | 1904 is the first leap year of the twentieth century.[14] |
January 1, 1950 | SEGA Dreamcast | |
January 1, 1960 | S-Plus, SAS | |
December 31, 1967 | Pick OS | Chosen so that the stored date modulo 7 gives the day of the week: 0 = Sunday, 1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday, 5 = Friday, 6 = Saturday.[15] |
January 1, 1970 | Unix time, used by Unix and Unix-like systems (Linux, Mac OS X), and programming languages: C, Java, JavaScript, Perl, PHP, Python. Also used by Precision Time Protocol. | |
January 1, 1978 | AmigaOS | |
January 1, 1980 | DOS, OS/2, FAT16 and FAT32 filesystems, VOS | |
January 6, 1980 | Qualcomm BREW, GPS | |
January 1, 1981 | Acorn NetFS | |
January 1, 1984 | CiA (CAN in Automation) CANopen TIME_STAMP | |
January 1, 2000 | AppleSingle, AppleDouble[16] | |
? ?, 2000 | FATX filesystem | |
January 1, 2001 | Apple's Cocoa framework | 2001 is the year of the release of Mac OS X 10.0. |
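Converting between two of the epochs in the table is a fixed-offset shift. For example, NTP (epoch January 1, 1900) and Unix time (epoch January 1, 1970) differ by 2,208,988,800 seconds; a sketch in Python that derives the constant rather than hard-coding it (the function name ntp_to_unix is illustrative):

```python
from datetime import datetime, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Offset between the two epochs: 2,208,988,800 seconds (25,567 days).
NTP_TO_UNIX = int((UNIX_EPOCH - NTP_EPOCH).total_seconds())

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert a count of seconds since the NTP epoch to a Unix timestamp."""
    return ntp_seconds - NTP_TO_UNIX

print(ntp_to_unix(2_208_988_800))  # 0: the Unix epoch itself
```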
Computers do not generally store arbitrarily large numbers. Instead, each number stored by a computer is allotted a fixed amount of space. Therefore, when the number of time units that have elapsed since a system's epoch exceeds the largest number that can fit in the space allotted to the time representation, the time representation overflows, and problems can occur. While a system's behavior after overflow is not necessarily predictable, in most systems the number representing the time will wrap around, and the computer system will conclude that the current time is at or near the epoch again.
Most famously, older systems that counted time as the number of years elapsed since the epoch of January 1, 1900, and that allotted only enough space to store the numbers 0 through 99, experienced the Year 2000 problem. These systems (if not corrected beforehand) would interpret the date January 1, 2000 as January 1, 1900, leading to unpredictable errors at the beginning of the year 2000.
Even systems that allocate more storage to the time representation are not immune to this kind of error. Many Unix-like operating systems keep time as seconds elapsed since the epoch of January 1, 1970, and allot timekeeping enough storage to store numbers as large as 2,147,483,647 (the largest signed 32-bit integer). Such systems will experience an overflow problem on January 19, 2038 if not fixed beforehand; this is known as the Year 2038 problem. Doubling the storage allocated to timekeeping on these systems, to 64 bits, allows them to represent dates more than 290 billion years into the future.
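The 32-bit wraparound can be demonstrated directly; a sketch using Python's standard ctypes and datetime modules to mimic a signed 32-bit counter:

```python
from ctypes import c_int32
from datetime import datetime, timezone

last_ok = 2_147_483_647  # largest signed 32-bit value
print(datetime.fromtimestamp(last_ok, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

wrapped = c_int32(last_ok + 1).value  # one more second overflows...
print(wrapped)                        # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00  ...and time appears to jump into the past
```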
Other, more subtle timekeeping problems exist in computing, such as accounting for leap seconds, which are inserted irregularly and cannot be predicted far in advance. Additionally, applications that need to represent historical dates and times (for example, a date prior to the switch from the Julian calendar to the Gregorian calendar) must use specialized timekeeping libraries.
Finally, some software must maintain compatibility with older software that does not keep time in strict accordance with traditional timekeeping systems. For example, Microsoft Excel recognizes the fictitious date of February 29, 1900 in order to maintain compatibility with older versions of Lotus 1-2-3.[13] Lotus 1-2-3 observed the date due to an error; by the time the error was discovered, it was too late to fix it: "a change now would disrupt formulas which were written to accommodate this anomaly".[17]
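A sketch of how such a quirk is handled in practice (the function name is illustrative): converting an Excel 1900-system serial number to a calendar date must skip the nonexistent February 29, 1900 (serial 60), which is also why the December 30, 1899 epoch in the table above makes later serials come out right:

```python
from datetime import datetime, timedelta

COM_EPOCH = datetime(1899, 12, 30)  # COM DATE zero, from the table above

def excel_serial_to_date(serial: int) -> datetime:
    """Convert an Excel 1900-system date serial to a datetime (illustrative)."""
    if serial == 60:
        raise ValueError("serial 60 is the fictitious February 29, 1900")
    if serial < 60:
        serial += 1  # before the phantom day, serials lag the epoch by one
    return COM_EPOCH + timedelta(days=serial)

print(excel_serial_to_date(1))   # 1900-01-01 00:00:00
print(excel_serial_to_date(61))  # 1900-03-01 00:00:00
```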